Irregular time series (ITS) arise naturally in electronic health records (EHRs) as a consequence of patient health dynamics, irregular hospital visits, diseases/conditions, and the need to measure different vital signs at each visit. ITS challenge the training of machine learning algorithms, which are mostly built on the assumption of a coherent, fixed-dimensional feature space. In this paper, we propose a novel COntinuous patient state PERceiver model, called COPER, to cope with ITS in EHRs. COPER uses Perceiver models and the concept of neural ordinary differential equations (ODEs) to learn the continuous-time dynamics of patient state, i.e., continuity of the input space and continuity of the output space. The neural ODEs help COPER generate regular time series to feed to the Perceiver model, which has the capability of handling multi-modal, large-scale inputs. To evaluate the performance of the proposed model, we use the in-hospital mortality prediction task on the MIMIC-III dataset and carefully design experiments to study irregularity. The results, compared against baselines, demonstrate the efficacy of the proposed model.
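The abstract gives no implementation details; the following is a minimal PyTorch sketch of the general idea only: roll an encoded patient state forward on a regular time grid with a latent ODE (here crudely integrated with Euler steps) and read the regularized series out with a Perceiver-style cross-attention head. All module names, dimensions, the Euler integrator, and the mortality head are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    """Latent dynamics dz/dt = f(z), integrated here with a simple Euler scheme."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def integrate(self, z0, t_grid):
        # z0: (batch, dim); t_grid: 1-D tensor of regular time points.
        zs, z, t_prev = [], z0, t_grid[0]
        for t in t_grid:
            z = z + (t - t_prev) * self.f(z)   # Euler step (zero-length first step)
            zs.append(z)
            t_prev = t
        return torch.stack(zs, dim=1)           # (batch, len(t_grid), dim)

class PerceiverReadout(nn.Module):
    """Cross-attention from a small set of learned latents to the regular series."""
    def __init__(self, dim, n_latents=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.head = nn.Linear(dim, 1)            # e.g. in-hospital mortality logit

    def forward(self, series):
        q = self.latents.unsqueeze(0).expand(series.size(0), -1, -1)
        out, _ = self.attn(q, series, series)
        return self.head(out.mean(dim=1))

# Toy usage: encode the irregular observations into z0 (not shown), roll the
# state out on a regular grid, then classify with the Perceiver-style readout.
dim = 16
ode, readout = LatentODE(dim), PerceiverReadout(dim)
z0 = torch.randn(4, dim)
regular_series = ode.integrate(z0, torch.linspace(0.0, 1.0, 24))
logits = readout(regular_series)
```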
Human medical data can be challenging to obtain due to data privacy concerns, the difficulty of running certain types of experiments, or prohibitive associated costs. In many settings, data from animal models or in-vitro cell lines are available and can help augment our understanding of human data. However, such data are known to have low etiological validity compared with human data. In this work, we augment small human medical datasets with in-vitro data and data from animal models. We use Invariant Risk Minimization (IRM) to elucidate invariant features by treating cross-organism data as belonging to different data-generating environments. Our models identify genes relevant to human cancer development. We observe some consistency across varying amounts of human and mouse data used, but further work is required to obtain conclusive insights. As a secondary contribution, we enhance existing open-source datasets and provide two uniformly processed, cross-organism, homologous-gene-matched datasets.
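For reference, a minimal sketch of the standard IRMv1-style training objective, with each organism treated as a separate environment. The data layout (a list of per-environment feature/label pairs) and the binary classification head are assumptions for illustration; this is not the paper's training code.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    """IRMv1-style penalty: squared gradient of the risk w.r.t. a dummy scale."""
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irm_objective(model, envs, penalty_weight=1.0):
    """Average empirical risk across environments plus the invariance penalty.

    `envs` is a hypothetical list of (features, labels) pairs, one per
    data-generating environment (e.g. human, mouse, in-vitro)."""
    risk, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x).squeeze(-1)
        risk = risk + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    return (risk + penalty_weight * penalty) / len(envs)
```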
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks are more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within task. We generalize this finding to meta-RL and study this dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate its significance empirically.
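The abstract does not state the derived schedule; the snippet below only illustrates the qualitative heuristic of planning with a shorter horizon (smaller discount) when the estimated model is built from few samples and letting the discount grow slowly as data accumulates. The 1/sqrt(n) shape and the constant are assumptions, not the paper's result.

```python
import numpy as np

def effective_discount(n_samples, gamma_max=0.99, c=1.0):
    """Illustrative slowly increasing discount schedule (assumed form)."""
    return max(0.0, gamma_max - c / np.sqrt(max(n_samples, 1)))

# Example: the planning discount used across tasks of an online meta-RL run,
# where per-task interaction is limited but experience accumulates over tasks.
for k, n in enumerate([10, 50, 200, 1000], start=1):
    print(f"task {k}: n={n:5d} -> gamma={effective_discount(n):.3f}")
```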
In this short note we derive a relationship between the Bregman divergence from the current policy to the optimal policy and the suboptimality of the current value function in a regularized Markov decision process. This result has implications for multi-task reinforcement learning, offline reinforcement learning, and regret analysis under function approximation, among others.
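For readers unfamiliar with the objects involved, the definitions below are illustrative only (the note's exact statement is not reproduced here): the Bregman divergence generated by a convex regularizer, and its specialization to the KL divergence under the negative-entropy regularizer common in regularized MDPs.

```latex
% Bregman divergence generated by a convex regularizer \Omega:
\[
  D_\Omega(\pi \,\|\, \pi^*) \;=\; \Omega(\pi) - \Omega(\pi^*)
      - \langle \nabla \Omega(\pi^*),\, \pi - \pi^* \rangle .
\]
% With the negative-entropy regularizer \Omega(\pi) = \sum_a \pi(a)\log\pi(a),
% this reduces to the KL divergence used in entropy-regularized MDPs:
\[
  D_\Omega(\pi \,\|\, \pi^*) \;=\; \mathrm{KL}(\pi \,\|\, \pi^*)
      \;=\; \sum_a \pi(a) \log \frac{\pi(a)}{\pi^*(a)} .
\]
```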
Finding different solutions to the same problem is a key aspect of intelligence, associated with creativity and adaptation to novel situations. In reinforcement learning, a set of diverse policies can be useful for exploration, transfer, hierarchy, and robustness. We propose Diverse Successive Policies, a method for discovering policies that are diverse in the space of successor features while being guaranteed to remain near optimal. We formalize the problem as a constrained Markov decision process (CMDP) in which the goal is to find policies that maximize diversity, characterized by an intrinsic diversity reward, while remaining near-optimal with respect to the extrinsic reward of the MDP. We also analyze the performance of recently proposed robustness and discrimination rewards and find that they are sensitive to the initialization of the procedure and may converge to sub-optimal solutions. To alleviate this, we propose new explicit diversity rewards that aim to minimize the correlation between the successor features of the policies in the set. We compare the different diversity mechanisms on the DeepMind Control Suite and find that the type of explicit diversity we propose is important for discovering distinct behaviors, such as different locomotion patterns.
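A minimal sketch of the underlying idea of an explicit diversity reward that penalizes correlation between successor features; the exact reward used in the paper may differ, and the cosine-similarity proxy and toy vectors below are assumptions for illustration.

```python
import numpy as np

def sf_correlation_diversity_reward(psi_new, psi_set, eps=1e-8):
    """Negative average similarity between a candidate policy's successor
    features and those of the policies already in the set (a correlation proxy)."""
    similarities = []
    for psi in psi_set:
        num = float(np.dot(psi_new, psi))
        den = np.linalg.norm(psi_new) * np.linalg.norm(psi) + eps
        similarities.append(num / den)
    return -float(np.mean(similarities)) if similarities else 0.0

# Toy usage: a policy whose successor features point elsewhere receives a
# higher diversity reward than one aligned with the existing set.
existing = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])]
print(sf_correlation_diversity_reward(np.array([0.0, 1.0, 0.0]), existing))
print(sf_correlation_diversity_reward(np.array([1.0, 0.0, 0.0]), existing))
```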
Maximizing a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of objectives in a Markov decision process (MDP). However, not all objectives can be captured this way. In this paper, we study convex MDPs, in which the objective is expressed as a convex function of the stationary distribution, and show that such objectives cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called 'pure exploration'. Our approach is to use Fenchel duality to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) players. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature.
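A sketch of the Fenchel-duality reformulation behind this game, with notation assumed rather than taken from the paper: writing the objective as a convex function f of the stationary state-action distribution d_π and using its convex conjugate f* gives a min-max problem between a policy player and a cost player.

```latex
\[
  \min_{\pi}\; f(d_\pi)
  \;=\;
  \min_{\pi}\; \max_{\lambda}\;
  \big\langle \lambda,\, d_\pi \big\rangle - f^*(\lambda),
\]
% The policy player chooses \pi (through its occupancy measure d_\pi), while the
% cost player chooses \lambda, a per-state-action cost (negative reward).
```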
We study how to construct a set of policies that can be composed together to solve a collection of reinforcement learning tasks. Each task is a different reward function, defined as a linear combination of known features. We consider a specific class of policy compositions that we call set improving policies (SIPs): given a set of policies and a set of tasks, a SIP is any composition of the former whose performance on all tasks is at least as good as that of its constituents. We focus on the most conservative instantiation of SIPs, set-max policies (SMPs), so our analysis extends to any SIP. This includes known policy-composition operators such as generalized policy improvement. Our main contribution is a policy iteration algorithm that builds a set of policies to maximize the worst-case performance of the resulting SMP on the set of tasks. The algorithm works by successively adding new policies to the set. We show that the worst-case performance of the resulting SMP strictly improves at each iteration, and that the algorithm stops only when there exists no policy that would lead to improved performance. We evaluate the algorithm empirically on a grid world and on a set of domains from the DeepMind Control Suite, confirming our theoretical results on its monotonically improving performance. Interestingly, we also show empirically that the sets of policies computed by the algorithm are diverse, leading to different trajectories in the grid world and to very distinct locomotion skills in the Control Suite.
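A minimal sketch of the set-max composition and of a greedy outer loop that keeps adding the candidate that most improves the SMP's worst-case value, stopping when no candidate strictly improves it. It assumes per-task policy values are already available (e.g. from policy evaluation) and is not the paper's policy-iteration algorithm.

```python
import numpy as np

def smp_value(policy_values, task):
    """Set-max policy: on each task, use the best policy in the set for that task."""
    return max(v[task] for v in policy_values)

def worst_case_smp_value(policy_values, tasks):
    """Worst-case (over tasks) performance of the SMP induced by the set."""
    return min(smp_value(policy_values, t) for t in tasks)

def greedy_set_construction(candidate_values, tasks, tol=1e-9):
    """candidate_values[i][t]: value of candidate policy i on task t (assumed given)."""
    chosen, current = [], -np.inf
    while True:
        best_value, best_i = current, None
        for i, v in enumerate(candidate_values):
            if any(v is c for c in chosen):
                continue
            value = worst_case_smp_value(chosen + [v], tasks)
            if value > best_value + tol:
                best_value, best_i = value, i
        if best_i is None:              # no candidate strictly improves the SMP
            return chosen
        chosen.append(candidate_values[best_i])
        current = best_value
```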
In reinforcement learning the Q-values summarize the expected future rewards that the agent will attain. However, they cannot capture the epistemic uncertainty about those rewards. In this work we derive a new Bellman operator with associated fixed point we call the `knowledge values'. These K-values compress both the expected future rewards and the epistemic uncertainty into a single value, so that high uncertainty, high reward, or both, can yield high K-values. The key principle is to endow the agent with a risk-seeking utility function that is carefully tuned to balance exploration and exploitation. When the agent follows a Boltzmann policy over the K-values it yields a Bayes regret bound of $\tilde O(L \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the total number of states, $A$ is the number of actions, and $T$ is the number of elapsed timesteps. We show deep connections of this approach to the soft-max and maximum-entropy strands of research in reinforcement learning.
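The abstract specifies acting with a Boltzmann policy over the K-values; a minimal sketch of that step is shown below. The temperature parameter and the toy numbers are illustrative assumptions, and the K-values themselves are taken as given.

```python
import numpy as np

def boltzmann_policy(k_values, temperature=1.0):
    """Softmax over K-values: actions with high expected reward, high epistemic
    uncertainty, or both, get higher selection probability."""
    z = np.asarray(k_values, dtype=float) / temperature
    z -= z.max()                        # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Example: the third action has a modest expected return but high epistemic
# uncertainty, so its K-value (and selection probability) is elevated.
print(boltzmann_policy([1.0, 0.5, 1.4]))
```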
This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
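To make the idea of repurposing gradient-free optimization into an attack concrete, here is a minimal random-search sketch for an L-infinity threat model: it queries only the loss value, never gradients, and keeps any perturbation that increases the loss. This conveys the general idea, not the specific attack procedures used in the paper.

```python
import numpy as np

def random_search_attack(loss_fn, x, epsilon=0.03, n_queries=1000, rng=None):
    """Gradient-free attack sketch: accept random in-ball proposals that raise the loss."""
    rng = np.random.default_rng() if rng is None else rng
    delta = np.zeros_like(x)
    best = loss_fn(x + delta)
    for _ in range(n_queries):
        proposal = np.clip(delta + rng.uniform(-epsilon, epsilon, size=x.shape),
                           -epsilon, epsilon)
        value = loss_fn(x + proposal)
        if value > best:                # keep perturbations that increase the loss
            best, delta = value, proposal
    return x + delta
```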